Earth's largest otters have chocolate bar-sized babies

Popular Science

Chester Zoo is celebrating the birth of giant otter triplets. Giant otter (Pteronura brasiliensis) newborns are surprisingly small, weighing only around 7.1 ounces, but adults can grow to six feet long and weigh up to 71 pounds.


Conformal Robust Set Estimation

Cholaquidis, Alejandro, Joly, Emilien, Moreno, Leonardo

arXiv.org Machine Learning

Conformal prediction provides finite-sample, distribution-free coverage under exchangeability, but standard constructions may lack robustness in the presence of outliers or heavy tails. We propose a robust conformal method based on a non-conformity score defined as the half-mass radius around a point, equivalently the distance to its $(\lfloor n/2\rfloor+1)$-nearest neighbour. We show that the resulting conformal regions are marginally valid for any sample size and converge in probability to a robust population central set defined through a distance-to-a-measure functional. Under mild regularity conditions, we establish exponential concentration and tail bounds that quantify the deviation between the empirical conformal region and its population counterpart. These results provide a probabilistic justification for using robust geometric scores in conformal prediction, even for heavy-tailed or multi-modal distributions.
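A minimal sketch of the construction described above, assuming a Euclidean sample in $\mathbb{R}^d$: the score of a point is the distance to its $(\lfloor n/2\rfloor+1)$-nearest neighbour in the sample, and the conformal region keeps the points whose score does not exceed a finite-sample conformal quantile of the sample's own scores. This is not the authors' code; the function names `half_mass_score` and `conformal_region_mask`, and the self-distance convention, are illustrative assumptions.

```python
import numpy as np

def half_mass_score(x, sample):
    """Distance from x to its (floor(n/2)+1)-nearest neighbour in `sample`.

    For x in `sample`, the zero self-distance counts as the first
    neighbour; this convention is a simplification, not the paper's.
    """
    n = len(sample)
    k = n // 2 + 1
    dists = np.sort(np.linalg.norm(sample - x, axis=1))
    return dists[k - 1]

def conformal_region_mask(grid, sample, alpha=0.1):
    """Boolean mask of grid points inside the level-(1 - alpha) region."""
    n = len(sample)
    scores = np.array([half_mass_score(x, sample) for x in sample])
    # Finite-sample conformal quantile at level ceil((1 - alpha)(n + 1)) / n.
    level = min(1.0, np.ceil((1 - alpha) * (n + 1)) / n)
    q = np.quantile(scores, level)
    return np.array([half_mass_score(x, sample) for x in grid]) <= q

# Usage: a heavy-tailed 2-D sample. Far-out grid points get large
# half-mass radii and fall outside the region.
rng = np.random.default_rng(0)
sample = rng.standard_t(df=2, size=(200, 2))
grid = rng.uniform(-5, 5, size=(1000, 2))
inside = conformal_region_mask(grid, sample, alpha=0.1)
print(f"{inside.mean():.0%} of grid points fall in the region")
```

The robustness claim is visible in the score itself: it depends only on a median-order neighbour distance, so a handful of outliers cannot inflate it the way a mean-distance or density score would.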


Scaling Sign Language Translation

Neural Information Processing Systems

Sign language translation (SLT) addresses the problem of translating information from a sign language in video to a spoken language in text. Existing studies, while showing progress, are often limited to narrow domains and/or few sign languages and struggle with open-domain tasks. In this paper, we push forward the frontier of SLT by scaling pretraining data, model size, and the number of translation directions. We perform large-scale SLT pretraining on different data, including 1) noisy multilingual YouTube SLT data, 2) parallel text corpora, and 3) SLT data augmented by translating video captions to other languages with off-the-shelf machine translation models. We unify different pretraining tasks with task-specific prompts under the encoder-decoder architecture, and initialize the SLT model with pretrained (m/By)T5 models across model sizes. SLT pretraining results on How2Sign and FLEURS-ASL#0 (ASL to 42 spoken languages) demonstrate the significance of data/model scaling and cross-lingual, cross-modal transfer, as well as the feasibility of zero-shot SLT. We finetune the pretrained SLT models on 5 downstream open-domain SLT benchmarks covering 5 sign languages. Experiments show substantial quality improvements over the vanilla baselines, surpassing the previous state-of-the-art (SOTA) by wide margins.
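The abstract's key engineering move is unifying heterogeneous pretraining tasks behind task-specific prompts so a single encoder-decoder model consumes them all. The sketch below illustrates that data-side unification; the prompt templates, task names, and `Example` fields are assumptions for illustration, not the paper's exact format.

```python
from dataclasses import dataclass

@dataclass
class Example:
    task: str       # "slt", "mt", or "augmented_slt" (illustrative names)
    source: str     # video-feature placeholder, or source-language text
    target: str     # target-language text
    src_lang: str
    tgt_lang: str

def to_model_input(ex: Example) -> tuple[str, str]:
    """Prefix each example with a task prompt so one encoder-decoder
    model can be trained on all pretraining tasks jointly."""
    prompts = {
        "slt": f"translate {ex.src_lang} sign video to {ex.tgt_lang}: ",
        "mt": f"translate {ex.src_lang} text to {ex.tgt_lang}: ",
        "augmented_slt": f"translate {ex.src_lang} sign video to {ex.tgt_lang}: ",
    }
    return prompts[ex.task] + ex.source, ex.target

# Usage: a YouTube-style SLT example and a parallel-text example map to
# the same (input, target) string interface the encoder-decoder expects.
print(to_model_input(Example("slt", "<video features>", "hello world", "ASL", "en")))
print(to_model_input(Example("mt", "hola mundo", "hello world", "es", "en")))
```

The benefit of this framing is that adding a pretraining task or translation direction only changes the prompt and the data, not the model architecture.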